Note: DOI links lead to external sites maintained by the publisher. Some full-text articles may not be available free of charge during the publisher's embargo period.
- Fortson, Lucy; Crowston, Kevin; Kloetzer, Laure; Ponti, Marisa (Eds.)
  Citizen science has become a valuable and reliable method for interpreting and processing big datasets, and is vital in the era of ever-growing data volumes. However, generating labels from citizen scientists is inherently difficult: variability between members of the crowd leads to variability in the results. Sometimes this variability is useful, as with serendipitous discoveries, which correspond to rare or unknown classes in the data, but it can also stem from ambiguity between classes. The primary challenge is to distinguish the intrinsic variability in the dataset from the uncertainty in the citizen scientists' responses, and to leverage that distinction to extract scientifically useful relationships. In this paper, we explore using a neural network to interpret volunteer confusion across the dataset and thereby increase the purity of the downstream analysis. We focus on using the network's learned features to disentangle feature similarity across the classes, and on the ability of the machine's "attention" to identify features that lead to confusion. We use data from Jovian Vortex Hunter, a citizen science project to study vortices in Jupiter's atmosphere, and find that the model's latent space effectively identifies the different sources of image-level features that lead to low volunteer consensus. Furthermore, the machine's attention highlights features corresponding to specific classes. This provides meaningful image-level feature-class relationships, which is useful in our analysis for identifying vortex-specific features to better understand vortex evolution mechanisms. Finally, we discuss the applicability of this method to other citizen science projects.
  Free, publicly-accessible full text available December 9, 2025.
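As a rough sketch of the latent-space analysis this abstract describes, the snippet below clusters hypothetical penultimate-layer CNN features and reports each cluster's mean volunteer consensus; the file names, feature dimensionality, and cluster count are illustrative assumptions, not the project's actual pipeline.

```python
# Hypothetical sketch: cluster CNN latent features and compare cluster
# membership with volunteer consensus (all names are illustrative).
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

# features: (n_images, d) penultimate-layer activations from a trained CNN
# consensus: (n_images,) fraction of volunteers agreeing on the modal class
features = np.load("latent_features.npy")      # assumed precomputed
consensus = np.load("volunteer_consensus.npy")  # assumed precomputed

# Reduce dimensionality before clustering to suppress noisy directions.
embedded = PCA(n_components=16).fit_transform(features)
labels = KMeans(n_clusters=8, n_init=10, random_state=0).fit_predict(embedded)

# Clusters with low mean consensus flag regions of the latent space where
# volunteers disagree, i.e. candidate sources of class confusion.
for k in range(8):
    print(k, consensus[labels == k].mean())
```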
- Fortson, Lucy; Crowston, Kevin; Kloetzer, Laure; Ponti, Marisa (Eds.)
  In the era of rapidly growing astronomical data, the gap between data collection and analysis is a significant barrier, especially for teams searching for rare scientific objects. Although machine learning (ML) can quickly parse large data sets, it struggles to robustly identify scientifically interesting objects, a task at which humans excel. Human-in-the-loop (HITL) strategies that combine the strengths of citizen science (CS) and ML offer a promising solution, but first we need to better understand the relationship between human- and machine-identified samples. In this work, we present a case study from the Galaxy Zoo: Weird & Wonderful project, where volunteers inspected ~200,000 astronomical images, processed by an ML-based anomaly detection model, to identify those with unusual or interesting characteristics. Volunteer-selected images with common astrophysical characteristics had higher consensus, while rarer or more complex ones had lower consensus, suggesting that low-consensus choices should not be dismissed in further explorations. Additionally, volunteers were better at filtering out uninteresting anomalies, such as image artifacts, with which the machine struggled. We also found that a higher ML-generated anomaly score, which indicates how anomalous an image's low-level features are, was a better predictor of the volunteers' consensus choice. Combining the locus of high volunteer-consensus images within the ML-learnt feature space with the anomaly score, we demonstrate a decision boundary that can effectively isolate images with unusual and potentially scientifically interesting characteristics. Using this case study, we lay out guidelines for future research looking to adapt and operationalize human-machine collaborative frameworks for efficient anomaly detection in big data.
  Free, publicly-accessible full text available December 9, 2025.
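A minimal sketch of the combined selection idea, assuming precomputed learnt features, anomaly scores, and volunteer consensus labels (all names hypothetical): a linear decision boundary over the distance to the high-consensus locus plus the anomaly score.

```python
# Illustrative sketch (not the authors' code): separate "interesting" from
# "uninteresting" images using the learnt feature space plus anomaly score.
import numpy as np
from sklearn.linear_model import LogisticRegression

features = np.load("ml_features.npy")          # assumed: (n, d) learnt features
scores = np.load("anomaly_scores.npy")         # assumed: (n,) ML anomaly scores
interesting = np.load("consensus_labels.npy")  # assumed: 1 = volunteer-flagged

# Distance from the centroid of high-consensus picks locates the "locus"
# of interesting images within the feature space.
centroid = features[interesting == 1].mean(axis=0)
dist = np.linalg.norm(features - centroid, axis=1)

# A linear boundary over (distance, anomaly score) is the simplest version
# of the combined selection the abstract describes.
X = np.column_stack([dist, scores])
clf = LogisticRegression().fit(X, interesting)
print("selection accuracy:", clf.score(X, interesting))
```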
- Abstract: We study the evolution of the bar fraction in disk galaxies between 0.5 < z < 4.0 using multiband colored images from the JWST Cosmic Evolution Early Release Science Survey (CEERS). These images were classified by citizen scientists in a new phase of the Galaxy Zoo (GZ) project called GZ CEERS. Citizen scientists were asked whether a strong or weak bar was visible in the host galaxy. After considering multiple corrections for observational biases, we find that the bar fraction decreases with redshift in our volume-limited sample (n = 398), from % at 0.5 < z < 1.0 to % at 3.0 < z < 4.0. However, we argue it is appropriate to interpret these fractions as lower limits. Disentangling real changes in the bar fraction from detection biases remains challenging. Nevertheless, we find a significant number of bars up to z = 2.5. This implies that disks are dynamically cool or baryon dominated, enabling them to host bars, and suggests that bar-driven secular evolution likely plays an important role at higher redshifts. When we distinguish between strong and weak bars, we find that the weak bar fraction decreases with increasing redshift. In contrast, the strong bar fraction is constant between 0.5 < z < 2.5. This implies that the strong bars found in this work are robust, long-lived structures, unless the rate of bar destruction is similar to the rate of bar formation. Finally, our results are consistent with disk instabilities being the dominant mode of bar formation at lower redshifts, while bar formation through interactions and mergers is more common at higher redshifts.
  Free, publicly-accessible full text available June 30, 2026.
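The headline measurement, a bar fraction per redshift bin with binomial uncertainties, can be reproduced in miniature as below; the input arrays are hypothetical stand-ins for the classified, bias-corrected sample.

```python
# Minimal sketch: bar fraction per redshift bin with binomial confidence
# intervals. Input arrays and bin edges are illustrative assumptions.
import numpy as np
from scipy import stats

z = np.load("redshift.npy")        # assumed per-galaxy redshifts
barred = np.load("is_barred.npy")  # assumed 0/1 bar classifications

edges = [0.5, 1.0, 2.0, 3.0, 4.0]
for lo, hi in zip(edges[:-1], edges[1:]):
    sel = (z >= lo) & (z < hi)
    n, k = int(sel.sum()), int(barred[sel].sum())
    if n == 0:
        continue  # skip empty bins
    ci = stats.binomtest(k, n).proportion_ci(0.68)  # ~1-sigma interval
    print(f"{lo}<z<{hi}: f_bar = {k/n:.2f} "
          f"(+{ci.high - k/n:.2f}/-{k/n - ci.low:.2f})")
```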
- Abstract: Giant star-forming clumps (GSFCs) are areas of intense star formation that are commonly observed in high-redshift (z ≳ 1) galaxies, but their formation and role in galaxy evolution remain unclear. Observations of low-redshift clumpy galaxy analogues are rare, but the availability of wide-field galaxy survey data makes the detection of large clumpy galaxy samples much more feasible. Deep Learning (DL), and in particular Convolutional Neural Networks (CNNs), have been successfully applied to image classification tasks in astrophysical data analysis. However, one application of DL that remains relatively unexplored is automatically identifying and localizing specific objects or features in astrophysical imaging data. In this paper, we demonstrate the use of DL-based object detection models to localize GSFCs in astrophysical imaging data. We apply the Faster Region-based Convolutional Neural Network (FRCNN) object detection framework to identify GSFCs in low-redshift (z ≲ 0.3) galaxies. Unlike other studies, we train different FRCNN models on observational data collected by the Sloan Digital Sky Survey and labelled by volunteers from the citizen science project 'Galaxy Zoo: Clump Scout'. The FRCNN model relies on a CNN component as a 'backbone' feature extractor. We show that CNNs pre-trained for image classification on astrophysical images outperform those pre-trained on terrestrial images. In particular, we compare a domain-specific CNN, 'Zoobot', with a generic classification backbone and find that Zoobot achieves higher detection performance. Our final model is capable of producing GSFC detections with a completeness and purity of ≥0.8 while being trained on only ~5000 galaxy images.
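A sketch of the detector setup, following torchvision's documented custom-backbone pattern for Faster R-CNN; a generic ImageNet-pretrained backbone stands in here for the domain-specific Zoobot encoder the paper compares against.

```python
# Faster R-CNN with a swappable backbone, per torchvision's documented
# pattern. The ImageNet backbone is a stand-in, not the paper's Zoobot model.
import torch
import torchvision
from torchvision.models.detection import FasterRCNN
from torchvision.models.detection.rpn import AnchorGenerator

# Any CNN feature extractor exposing `out_channels` can serve as backbone.
backbone = torchvision.models.mobilenet_v2(weights="IMAGENET1K_V1").features
backbone.out_channels = 1280

# Anchor sizes chosen arbitrarily for illustration; clumps are small targets.
anchors = AnchorGenerator(sizes=((16, 32, 64, 128),),
                          aspect_ratios=((0.5, 1.0, 2.0),))

# Two classes: clump vs background.
model = FasterRCNN(backbone, num_classes=2, rpn_anchor_generator=anchors)
model.eval()
with torch.no_grad():
    # Returns a list of dicts with 'boxes', 'labels', and 'scores'.
    detections = model([torch.rand(3, 224, 224)])
```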
- ABSTRACT: We study the bar pattern speeds and corotation radii of 225 barred galaxies, using integral field unit data from MaNGA and the Tremaine–Weinberg method. Our sample, which is divided between strongly and weakly barred galaxies identified via Galaxy Zoo, is the largest to which this method has been applied. We find lower pattern speeds for strongly barred galaxies than for weakly barred galaxies. As simulations show that the pattern speed decreases as the bar exchanges angular momentum with its host, these results suggest that strong bars are more evolved than weak bars. Interestingly, the corotation radius is not different between weakly and strongly barred galaxies, despite being proportional to bar length. We also find that the corotation radius is significantly different between quenching and star-forming galaxies. Additionally, we find that strongly barred galaxies have significantly lower values of $\mathcal{R}$, the ratio between the corotation radius and the bar radius, than weakly barred galaxies, despite a large overlap between the two distributions. This ratio classifies bars into ultrafast bars ($\mathcal{R} < 1.0$; 11 per cent of our sample), fast bars ($1.0 < \mathcal{R} < 1.4$; 27 per cent), and slow bars ($\mathcal{R} > 1.4$; 62 per cent). Simulations show that $\mathcal{R}$ is correlated with the bar formation mechanism, so our results suggest that strong bars are more likely to be formed by different mechanisms than weak bars. Finally, we find a lower fraction of ultrafast bars than most other studies, which decreases the recently claimed tension with Lambda cold dark matter. However, the median value of $\mathcal{R}$ is still lower than predicted by simulations.
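The $\mathcal{R}$-based classification quoted above is a simple threshold rule; a direct transcription:

```python
# Threshold rule for the speed parameter R = R_corotation / R_bar,
# using the boundaries stated in the abstract.
def classify_bar(r_corotation: float, r_bar: float) -> str:
    """Classify a bar as ultrafast, fast, or slow from R = R_CR / R_bar."""
    R = r_corotation / r_bar
    if R < 1.0:
        return "ultrafast"  # 11 per cent of the sample
    if R < 1.4:
        return "fast"       # 27 per cent
    return "slow"           # 62 per cent

print(classify_bar(6.0, 5.0))  # R = 1.2 -> "fast"
```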
- Abstract: Mergers play a complex role in galaxy formation and evolution. Continuing to improve our understanding of these systems requires ever larger samples, which can be difficult (even impossible) to select from individual surveys. We use the new platform ESA Datalabs to assemble a catalog of interacting galaxies from the Hubble Space Telescope science archives; this catalog is larger than previously published catalogs by nearly an order of magnitude. In particular, we apply the Zoobot convolutional neural network directly to the entire public archive of HST F814W images and make probabilistic interaction predictions for 126 million sources from the Hubble Source Catalog. We employ a combination of automated visual representation and visual analysis to identify a clean sample of 21,926 interacting galaxy systems, mostly with z < 1. Sixty-five percent of these systems have no previous references in either the NASA Extragalactic Database or Simbad. In the process of removing contamination, we also discover many other objects of interest, such as gravitational lenses, edge-on protoplanetary disks, and "backlit" overlapping galaxies. We briefly investigate the basic properties of this sample, and we make our catalog publicly available for use by the community. In addition to providing a new catalog of scientifically interesting objects imaged by HST, this work also demonstrates the power of the ESA Datalabs tool to facilitate substantial archival analysis without placing a high computational or storage burden on the end user.
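The final selection step, reducing millions of probabilistic predictions to a candidate list, might look like the sketch below; the file name, column name, and threshold are hypothetical, not the actual catalog schema.

```python
# Hypothetical sketch of the selection step: thresholding per-source
# interaction probabilities into a candidate list for visual inspection.
import pandas as pd

preds = pd.read_parquet("hsc_zoobot_predictions.parquet")  # assumed file
candidates = (preds[preds["p_interaction"] > 0.9]           # assumed column
              .sort_values("p_interaction", ascending=False))
print(len(candidates), "candidate interacting systems for visual inspection")
```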
- ABSTRACT: We present detailed morphology measurements for 8.67 million galaxies in the DESI Legacy Imaging Surveys (DECaLS, MzLS, and BASS, plus DES). These are automated measurements made by deep learning models trained on Galaxy Zoo volunteer votes. Our models typically predict the fraction of volunteers selecting each answer to within 5–10 per cent, for every answer to every GZ question. The models are trained on newly collected votes for DESI-LS DR8 images as well as historical votes from GZ DECaLS. We also release the newly collected votes. Extending our morphology measurements beyond the previously released DECaLS/SDSS intersection increases our sky coverage by a factor of 4 (from 5000 to 19 000 deg²) and allows for full overlap with complementary surveys including ALFALFA and MaNGA.
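Conceptually, predicting vote fractions means one normalized output per decision-tree question, so each question's predictions sum to 1 like the volunteer fractions. The sketch below uses a softmax head per question as a generic stand-in; it is not the released model, and the dimensions are illustrative.

```python
# Generic vote-fraction predictor: one softmax head per GZ question so each
# output is a valid probability vector over that question's answers.
import torch
import torch.nn as nn

class VoteFractionHead(nn.Module):
    def __init__(self, feat_dim: int, answers_per_question: list[int]):
        super().__init__()
        self.heads = nn.ModuleList(nn.Linear(feat_dim, n)
                                   for n in answers_per_question)

    def forward(self, features: torch.Tensor) -> list[torch.Tensor]:
        # Each head yields predicted vote fractions for one question.
        return [torch.softmax(h(features), dim=-1) for h in self.heads]

head = VoteFractionHead(512, [3, 2, 2])  # e.g. smooth/featured/artifact, ...
fractions = head(torch.randn(4, 512))    # batch of 4 feature vectors
```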
- ABSTRACT: Astronomers have typically set out to solve supervised machine learning problems by creating their own representations from scratch. We show that deep learning models trained to answer every Galaxy Zoo DECaLS question learn meaningful semantic representations of galaxies that are useful for new tasks on which the models were never trained. We exploit these representations to outperform several recent approaches at practical tasks crucial for investigating large galaxy samples. The first task is identifying galaxies of similar morphology to a query galaxy. Given a single galaxy assigned a free-text tag by humans (e.g. '#diffuse'), we can find galaxies matching that tag for most tags. The second task is identifying the most interesting anomalies to a particular researcher. Our approach is 100 per cent accurate at identifying the most interesting 100 anomalies (as judged by Galaxy Zoo 2 volunteers). The third task is adapting a model to solve a new task using only a small number of newly labelled galaxies. Models fine-tuned from our representation are better able to identify ring galaxies than models fine-tuned from terrestrial images (ImageNet) or trained from scratch. We solve each task with very few new labels: either one (for the similarity search) or several hundred (for anomaly detection or fine-tuning). This challenges the longstanding view that deep supervised methods require large new labelled data sets for practical use in astronomy. To help the community benefit from our pretrained models, we release our fine-tuning code, zoobot. Zoobot is accessible to researchers with no prior experience in deep learning.
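The fine-tuning recipe the abstract describes, reusing a frozen pretrained representation and training only a small new head on a few hundred labels, looks schematically like this; it is generic PyTorch with an ImageNet encoder standing in, not the zoobot package API.

```python
# Generic fine-tuning sketch: freeze a pretrained encoder, train only a
# small new head (ring vs not-ring). Encoder choice is illustrative.
import torch
import torch.nn as nn
import torchvision

encoder = torchvision.models.resnet18(weights="IMAGENET1K_V1")
encoder.fc = nn.Identity()          # expose the 512-d representation
for p in encoder.parameters():
    p.requires_grad = False         # freeze the pretrained representation

head = nn.Linear(512, 2)            # ring / not-ring classifier
optimizer = torch.optim.Adam(head.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

def train_step(images: torch.Tensor, labels: torch.Tensor) -> float:
    with torch.no_grad():
        feats = encoder(images)     # reuse frozen features
    loss = loss_fn(head(feats), labels)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```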